
    One Action System or Two? Evidence for Common Central Preparatory Mechanisms in Voluntary and Stimulus-Driven Actions

    Human behavior comprises an interaction between intentionally driven actions and reactions to changes in the environment. Existing data are equivocal on whether these two action systems are independent, involve different brain regions, or overlap. To address this question, we investigated whether the degree to which the voluntary action system is activated at the time of stimulus onset predicts reaction times to external stimuli. We recorded event-related potentials while participants prepared and executed left- or right-hand voluntary actions, which were occasionally interrupted by a stimulus requiring either a left- or right-hand response. In trials where participants successfully performed the stimulus-driven response, increased voluntary motor preparation was associated with faster responses on congruent trials (where participants were preparing a voluntary action with the same hand that was then required by the target stimulus) and slower responses on incongruent trials. This suggests that early hand-specific activity in medial frontal cortex on voluntary action trials can be used by the stimulus-driven system to speed responding, a finding that calls into question the clear distinction between voluntary and stimulus-driven action systems. © 2011 the Authors.

    Embodied Gesture Processing: Motor-Based Integration of Perception and Action in Social Artificial Agents

    A close coupling of perception and action processes is assumed to play an important role in basic capabilities of social interaction, such as guiding attention and observation of others’ behavior, coordinating the form and functions of behavior, or grounding the understanding of others’ behavior in one’s own experiences. In an attempt to endow artificial embodied agents with similar abilities, we present a probabilistic model for the integration of perception and generation of hand-arm gestures via a hierarchy of shared motor representations, allowing for combined bottom-up and top-down processing. Results from human-agent interactions are reported, demonstrating the model’s performance in learning, observation, imitation, and generation of gestures.

    Cognitive loading affects motor awareness and movement kinematics but not locomotor trajectories during goal-directed walking in a virtual reality environment.

    The primary purpose of this study was to investigate the effects of cognitive loading on movement kinematics and trajectory formation during goal-directed walking in a virtual reality (VR) environment. The secondary objective was to measure how participants corrected their trajectories for perturbed feedback and how participants' awareness of such perturbations changed under cognitive loading. We asked 14 healthy young adults to walk towards four different target locations in a VR environment while their movements were tracked and played back in real time on a large projection screen. In 75% of all trials we introduced angular deviations of ±5° to ±30° between the veridical walking trajectory and the visual feedback. Participants performed a second experimental block under cognitive load (serial-7 subtraction, counter-balanced across participants). We measured walking kinematics (joint angles, velocity profiles) and motor performance (end-point compensation, trajectory deviations). Motor awareness was determined by asking participants to rate the veracity of the feedback after every trial. In line with previous findings in natural settings, participants displayed stereotypical walking trajectories in a VR environment. Our results extend these findings by demonstrating that taxing cognitive resources did not affect trajectory formation and deviations, although it interfered with the participants' movement kinematics, in particular walking velocity. Additionally, we report that motor awareness was selectively impaired by the secondary task in trials with high perceptual uncertainty. Together with data on eye and arm movements, our findings lend support to the hypothesis that the central nervous system (CNS) uses common mechanisms to govern goal-directed movements, including locomotion. We discuss our results with respect to the use of VR methods in gait control and rehabilitation.
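    The feedback perturbation described above amounts to rotating the walked path about its start point before displaying it. A minimal sketch of that manipulation follows; the function name and the sample trajectory are illustrative assumptions, not the study's actual implementation:

```python
import numpy as np

def rotate_trajectory(xy, angle_deg):
    """Rotate a 2-D trajectory about its start point.

    xy: (N, 2) array of positions; angle_deg: angular deviation
    between the veridical path and the displayed feedback.
    (Illustrative sketch, not the study's code.)
    """
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    origin = xy[0]
    # Rotate about the starting position, then translate back
    return (xy - origin) @ rot.T + origin

# A straight walk toward a target 5 m ahead, shown with a +10° deviation
true_path = np.column_stack([np.zeros(6), np.linspace(0.0, 5.0, 6)])
fed_back = rotate_trajectory(true_path, 10.0)
```

    Because the rotation preserves distances from the start point, the perturbation changes the apparent heading without altering the apparent walking speed, which is what lets the deviation go unnoticed on many trials.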

    Dynamic Social Adaptation of Motion-Related Neurons in Primate Parietal Cortex

    Social brain function, which allows us to adapt our behavior to social context, is poorly understood at the single-cell level, due largely to technical limitations. But the questions involved are vital: How do neurons recognize and modulate their activity in response to social context? To probe the mechanisms involved, we developed a novel recording technique, called multi-dimensional recording, and applied it simultaneously in the left parietal cortices of two monkeys while they shared a common social space. When the monkeys sat near each other but did not interact, each monkey's parietal activity showed robust response preference to action by his own right arm and almost no response to action by the other's arm. But the preference was broken if social conflict emerged between the monkeys, specifically, if both were able to reach for the same food item placed on the table between them. Under these circumstances, parietal neurons started to show complex combinatorial responses to motion of self and other. Parietal cortex adapted its response properties in the social context by discarding and recruiting different neural populations. Our results suggest that parietal neurons can recognize social events in the environment linked with current social context and form part of a larger social brain network.

    First Person Experience of Body Transfer in Virtual Reality

    Background: Altering the normal association between touch and its visual correlate can result in the illusory perception of a fake limb as part of our own body. Thus, when touch is seen to be applied to a rubber hand while felt synchronously on the corresponding hidden real hand, an illusion of ownership of the rubber hand usually occurs. The illusion has also been demonstrated using visuomotor correlation between the movements of the hidden real hand and the seen fake hand. This type of paradigm has been used with respect to the whole body, generating out-of-the-body and body substitution illusions. However, such studies have only ever manipulated a single factor and, although they used a form of virtual reality, have not exploited the power of immersive virtual reality (IVR) to produce radical transformations in body ownership.
    Principal Findings: Here we show that a first-person perspective of a life-sized virtual human female body that appears to substitute the male subjects' own bodies was sufficient to generate a body transfer illusion. This was demonstrated subjectively by questionnaire and physiologically through heart-rate deceleration in response to a threat to the virtual body. This finding is in contrast to earlier experimental studies that assume visuotactile synchrony to be the critical contributory factor in ownership illusions. Our finding was possible because IVR allowed us to use a novel experimental design for this type of problem with three independent binary factors: (i) perspective position (first or third), (ii) synchronous or asynchronous mirror reflections, and (iii) synchrony or asynchrony between felt and seen touch.
    Conclusions: The results support the notion that bottom-up perceptual mechanisms can temporarily override top-down knowledge, resulting in a radical illusion of transfer of body ownership. The research also illustrates immersive virtual reality as a powerful tool in the study of body representation and experience, since it supports experimental manipulations that would otherwise be infeasible, the technology being mature enough to represent human bodies and their motion.

    fMRI Supports the Sensorimotor Theory of Motor Resonance

    The neural mechanisms mediating the activation of the motor system during action observation, also known as motor resonance, are of major interest to the field of motor control. It has been proposed that motor resonance develops in infants through Hebbian plasticity of pathways connecting sensory and motor regions that fire simultaneously during imitation or self-movement observation. A fundamental problem when testing this theory in adults is that most experimental paradigms involve actions that have been overpracticed throughout life. Here, we directly tested the sensorimotor theory of motor resonance by creating new visuomotor representations using abstract stimuli (motor symbols) and identifying the neural networks recruited through fMRI. We predicted that the network recruited during action observation and execution would overlap with that recruited during observation of new motor symbols. Our results indicate that a network consisting of premotor and posterior parietal cortex, the supplementary motor area, the inferior frontal gyrus, and the cerebellum was activated both by new motor symbols and by direct observation of the corresponding action. This tight spatial overlap underscores the importance of sensorimotor learning for motor resonance and further indicates that the physical characteristics of the perceived stimulus are irrelevant to the evoked response in the observer.

    “Biological Geometry Perception”: Visual Discrimination of Eccentricity Is Related to Individual Motor Preferences

    In the continuum between a stroke and a circle, including all possible ellipses, some eccentricities seem more “biologically preferred” than others by the motor system, probably because they imply less demanding coordination patterns. Based on the idea that biological motion perception relies on knowledge of the laws that govern the motor system, we investigated whether motorically preferential and non-preferential eccentricities are visually discriminated differently. In contrast with previous studies that examined the effect of kinematic/time features of movements on their visual perception, we focused on geometric/spatial features and therefore used a static visual display.
    In a dual-task paradigm, participants visually discriminated 13 static ellipses of various eccentricities while performing a finger-thumb opposition sequence with either the dominant or the non-dominant hand. Our assumption was that because the movements used to trace ellipses are strongly lateralized, a motor task performed with the dominant hand should affect the simultaneous visual discrimination more strongly. We found that visual discrimination was not affected when the motor task was performed by the non-dominant hand. Conversely, it was impaired when the motor task was performed with the dominant hand, but only for the ellipses that we defined as preferred by the motor system, based on an assessment of individual preferences during an independent graphomotor task.
    Visual discrimination of ellipses thus depends on the state of the motor neural networks controlling the dominant hand, but only when their eccentricity is “biologically preferred”. Importantly, this effect emerges on the basis of a static display, suggesting that what we call “biological geometry”, i.e., geometric features resulting from preferential movements, is relevant information for the visual processing of bidimensional shapes.
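    The stroke-to-circle continuum the study samples can be made concrete with the standard eccentricity formula for an ellipse; this is a minimal sketch (function name and sample axis lengths are illustrative, not taken from the paper's stimuli):

```python
import math

def eccentricity(a, b):
    """Eccentricity of an ellipse with semi-axes a >= b > 0.

    e = 0 is a circle; as b shrinks toward 0 the ellipse flattens
    toward a straight stroke and e approaches 1.
    """
    return math.sqrt(1.0 - (b / a) ** 2)

# A circle, a mid-range ellipse, and a nearly flat "stroke"
samples = [eccentricity(1.0, 1.0),
           eccentricity(2.0, 1.0),
           eccentricity(1.0, 0.05)]
```

    Sampling eccentricities along this [0, 1) range is one plausible way to construct a stimulus set like the 13 ellipses described above.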

    Rubber Hands Feel Touch, but Not in Blind Individuals

    Psychology and neuroscience have a long-standing tradition of studying blind individuals to investigate how visual experience shapes perception of the external world. Here, we study how blind people experience their own body by exposing them to a multisensory body illusion: the somatic rubber hand illusion. In this illusion, healthy blindfolded participants experience that they are touching their own right hand with their left index finger, when in fact they are touching a rubber hand with their left index finger while the experimenter touches their right hand in a synchronized manner (Ehrsson et al. 2005). We compared the strength of this illusion in a group of blind individuals (n = 10), all of whom had experienced severe visual impairment or complete blindness from birth, and a group of age-matched blindfolded sighted participants (n = 12). The illusion was quantified subjectively using questionnaires and behaviorally by asking participants to point to the felt location of the right hand. The results showed that the sighted participants experienced a strong illusion, whereas the blind participants experienced no illusion at all, a difference that was evident in both tests employed. A further experiment testing the participants' basic ability to localize the right hand in space without vision (proprioception) revealed no difference between the two groups. Taken together, these results suggest that blind individuals with impaired visual development have a more veridical percept of self-touch and a less flexible and dynamic representation of their own body in space compared to sighted individuals. We speculate that the multisensory brain systems that re-map somatosensory signals onto external reference frames are less developed in blind individuals and therefore do not allow efficient fusion of tactile and proprioceptive signals from the two upper limbs into a single illusory experience of self-touch as in sighted individuals.